121 research outputs found

    iPDA: An Integrity-Protecting Private Data Aggregation Scheme for Wireless Sensor Networks

    Data aggregation is an efficient mechanism widely used in wireless sensor networks (WSNs) to collect statistics about data of interest. However, the shared-medium nature of communication makes WSNs vulnerable to eavesdropping and to packet tampering/injection by adversaries. Hence, protecting data privacy and data integrity are two major challenges for data aggregation in wireless sensor networks. In this paper, we present iPDA, an integrity-protecting private data aggregation scheme. In iPDA, data privacy is achieved through a data slicing and assembling technique, and data integrity is achieved through redundancy, by constructing disjoint aggregation paths/trees to collect data of interest. In iPDA, the data integrity-protection and data privacy-preservation mechanisms work synergistically. We evaluate the iPDA scheme in terms of the efficacy of privacy preservation, communication overhead, and data aggregation accuracy, comparing it with a typical data aggregation scheme, TAG, which provides neither integrity protection nor privacy preservation. Both theoretical analysis and simulation results show that iPDA achieves its design goals while still maintaining the efficiency of data aggregation.
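    The slicing-and-assembling idea can be illustrated with a small sketch (the node names, topology, and slice count below are hypothetical, not the authors' implementation): each node splits its reading into random additive slices, keeps one, and sends the others to distinct neighbors; every node then reports only the sum of the slices it holds, so the aggregate is preserved while no single message exposes a raw reading.

```python
import random

def slice_reading(value, num_slices):
    """Split a reading into random additive slices that sum back to the value."""
    slices = [random.uniform(-value, value) for _ in range(num_slices - 1)]
    slices.append(value - sum(slices))
    return slices

def simulate_slicing(readings, neighbors):
    """readings: {node: value}; neighbors: {node: [peer, ...]} (toy topology).
    Returns each node's assembled share; the shares sum to the true aggregate."""
    held = {node: 0.0 for node in readings}
    for node, value in readings.items():
        slices = slice_reading(value, num_slices=len(neighbors[node]) + 1)
        held[node] += slices[0]                      # keep one slice locally
        for peer, s in zip(neighbors[node], slices[1:]):
            held[peer] += s                          # route the rest to distinct peers
    return held

readings = {"A": 10.0, "B": 7.5, "C": 3.2}
neighbors = {"A": ["B", "C"], "B": ["A", "C"], "C": ["A", "B"]}
shares = simulate_slicing(readings, neighbors)
assert abs(sum(shares.values()) - sum(readings.values())) < 1e-9   # aggregate preserved
```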

    Dynamic Voltage Scaling Techniques for Energy Efficient Synchronized Sensor Network Design

    Building energy-efficient systems is one of the principal challenges in wireless sensor networks. Dynamic voltage scaling (DVS), a technique to reduce energy consumption by varying the CPU frequency on the fly, has been widely used in other settings to accomplish this goal. In this paper, we show that changing the CPU frequency can affect timekeeping functionality of some sensor platforms. This phenomenon can cause an unacceptable loss of time synchronization in networks that require tight synchrony over extended periods, thus preventing all existing DVS techniques from being applied. We present a method for reducing energy consumption in sensor networks via DVS, while minimizing the impact of CPU frequency switching on time synchronization. The system is implemented and evaluated on a network of 11 Imote2 sensors mounted on a truss bridge and running a high-fidelity continuous structural health monitoring application. Experimental measurements confirm that the algorithm significantly reduces network energy consumption over the same network that does not use DVS, while requiring significantly fewer re-synchronization actions than a classic DVS algorithm.
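    The timekeeping hazard can be seen in a toy model (the platform details below are hypothetical, and the paper's compensation method is not reproduced here): if the timer counts ticks of the scaled CPU clock but software converts ticks using the nominal rate, the clock drifts after every DVS switch; folding the elapsed time at the old rate into an offset at each switch keeps the reported time consistent.

```python
class DvsClock:
    """Toy model of a timer clocked from the (DVS-scaled) CPU clock."""
    def __init__(self, nominal_hz):
        self.nominal_hz = nominal_hz
        self.current_hz = nominal_hz
        self.ticks = 0            # raw timer ticks
        self.offset = 0.0         # wall time accumulated at previous frequencies
        self.epoch_ticks = 0      # tick count at the last frequency switch

    def run(self, wall_seconds):
        self.ticks += int(wall_seconds * self.current_hz)   # ticks follow CPU speed

    def set_frequency(self, new_hz):
        # Fold time elapsed at the old rate into the offset before switching.
        self.offset += (self.ticks - self.epoch_ticks) / self.current_hz
        self.epoch_ticks = self.ticks
        self.current_hz = new_hz

    def naive_time(self):
        return self.ticks / self.nominal_hz                 # drifts after a switch

    def compensated_time(self):
        return self.offset + (self.ticks - self.epoch_ticks) / self.current_hz

clk = DvsClock(nominal_hz=8_000_000)
clk.run(1.0)                  # one second at full speed
clk.set_frequency(4_000_000)  # DVS halves the CPU (and timer) frequency
clk.run(1.0)                  # one more second at half speed
print(clk.naive_time())       # 1.5 -- apparent loss of synchronization
print(clk.compensated_time()) # 2.0 -- frequency switch accounted for
```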

    On Achieving Diversity in the Presence of Outliers in Participatory Camera Sensor Networks

    This paper addresses the problem of collection and delivery of a representative subset of pictures, in participatory camera networks, to maximize coverage when a significant portion of the pictures may be redundant or irrelevant. Consider, for example, a rescue mission where volunteers and survivors of a large-scale disaster scout a wide area to capture pictures of damage in distressed neighborhoods, using handheld cameras, and report them to a rescue station. In this participatory camera network, a significant number of pictures may be redundant (i.e., similar pictures may be reported by many) or irrelevant (i.e., may not document an event of interest). Given this pool of pictures, we aim to build a protocol to store and deliver a smaller subset of pictures, among all those taken, that minimizes redundancy and eliminates irrelevant objects and outliers. While previous work addressed removal of redundancy alone, doing so in the presence of outliers is tricky, because outliers, by their very nature, are different from other objects, causing redundancy-minimizing algorithms to favor their inclusion, which is at odds with the goal of finding a representative subset. To eliminate both outliers and redundancy at the same time, two seemingly opposite objectives must be met together. The contribution of this paper lies in a new prioritization technique (and its in-network implementation) that minimizes redundancy among delivered pictures, while also reducing outliers.
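    One way to see why the two objectives conflict, and how prioritization can reconcile them, is a small greedy sketch (a toy stand-in, not the paper's in-network algorithm; the 2-D feature vectors and thresholds are made up): pictures in sparse regions of feature space are treated as likely outliers and filtered out first, and the remaining pictures are then picked to be far from what has already been chosen, which minimizes redundancy.

```python
import math

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def density(p, pictures, radius):
    """Fraction of the pool within `radius` of p; outliers score near zero."""
    return sum(dist(p, q) <= radius for q in pictures) / len(pictures)

def select(pictures, k, radius=1.0, min_density=0.25):
    """Drop likely outliers (low local density), then greedily pick pictures
    that are far from those already chosen (low redundancy)."""
    pool = [p for p in pictures if density(p, pictures, radius) >= min_density]
    chosen = [max(pool, key=lambda p: density(p, pictures, radius))]
    while len(chosen) < min(k, len(pool)):
        chosen.append(max((p for p in pool if p not in chosen),
                          key=lambda p: min(dist(p, c) for c in chosen)))
    return chosen

# Two dense clusters plus one far-away outlier (hypothetical image features).
pool = [(0, 0), (0.1, 0.2), (0.2, 0.1), (5, 5), (5.1, 4.9), (50, 50)]
print(select(pool, k=2))   # one picture per cluster; (50, 50) is never delivered
```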

    Schedulability Analysis for Certification-friendly Multicore Systems

    This paper presents a new schedulability test for safety-critical software undergoing a transition from single-core to multicore systems, a challenge faced by multiple industries today. Our migration model, consisting of a schedulability test and an execution model, is distinguished by three aspects consistent with reducing transition cost. First, it assumes that externally driven scheduling parameters, such as periods and deadlines, remain fixed (and thus known), whereas exact computation times are not. Second, it adopts a globally synchronized, conflict-free I/O model that leads to a decoupling between cores, simplifying the schedulability analysis. Third, it employs global priority assignment across all tasks on each core, irrespective of application, where budget constraints on each application ensure isolation. These properties enable us to obtain a utilization bound that places an allowable limit on total task execution times. Evaluation results demonstrate the advantages of our scheduling model over competing resource-partitioning approaches, such as Periodic Server and TDMA.
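    The flavor of a utilization-bound test can be sketched as follows. The sketch uses the classic Liu and Layland rate-monotonic bound purely as a stand-in; the paper derives its own bound for the multicore migration model, which is not reproduced here. With periods and deadlines fixed by external requirements, the test places an allowable limit on total execution times per core.

```python
def liu_layland_bound(n):
    """Classic rate-monotonic utilization bound, used here only as a
    placeholder for the paper's bound."""
    return n * (2 ** (1.0 / n) - 1)

def fits_on_core(tasks, bound=liu_layland_bound):
    """tasks: list of (worst_case_execution_time, period) pairs, periods fixed.
    Returns True if total utilization stays under the bound, i.e. the task set
    respects the allowable limit on total execution times."""
    utilization = sum(c / t for c, t in tasks)
    return utilization <= bound(len(tasks))

core_tasks = [(2.0, 10.0), (3.0, 20.0), (5.0, 50.0)]   # hypothetical (C, T) pairs
print(fits_on_core(core_tasks))   # True: utilization 0.45 <= bound 0.779...
```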

    Passivity Based Control in Software Systems

    In this paper, we use passivity theory as an approach for dealing with dynamical systems, and demonstrate how to apply it to software systems in a general way. We first cover key results from passivity theory. Then, using an example simulated system, we demonstrate how to design a controller that guarantees asymptotic BIBO stability for the system, using a passivity-based control (PBC) approach. Finally, we examine more complex software systems from other publications, Proteus and Pyro, and demonstrate how to apply passivity theory to these kinds of systems.
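    For reference, the dissipation inequality that underlies a passivity argument is sketched below (standard passivity-theory material in textbook form, not a result specific to this paper): a system with input u and output y is passive if some non-negative storage function V bounds the energy the system can release by the energy supplied to it; closing a negative-feedback loop with a (strictly) passive controller then yields a dissipative, stable interconnection.

```latex
% Standard dissipation inequality: the system with input u and output y is
% passive if there exists a storage function V(x) >= 0 such that
\[
V\bigl(x(t)\bigr) - V\bigl(x(0)\bigr) \;\le\; \int_{0}^{t} u(\tau)^{\mathsf{T}}\, y(\tau)\, \mathrm{d}\tau
\qquad \text{for all } t \ge 0 .
\]
```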

    SenseWorld: Towards Cyber-Physical Social Networks

    Web-based social networks such as LinkedIn, Facebook and MySpace have gained wide popularity in recent years. With the advent of ubiquitous sensing, future social networks will be cyber-physical, combining measured elements…

    An Experimental Evaluation of Datacenter Workloads On Low-Power Embedded Micro Servers

    This paper presents a comprehensive evaluation of an ultra-low-power cluster, built upon Intel Edison-based micro servers. The improved performance and high energy efficiency of micro servers have driven both academia and industry to explore the possibility of replacing conventional brawny servers with a larger swarm of embedded micro servers. Existing attempts mostly focus on mobile-class micro servers, whose capacities are similar to mobile phones. We, on the other hand, target sensor-class micro servers, which are originally intended for use in wearable technologies, sensor networks, and the Internet of Things. Although sensor-class micro servers have much less capacity, they are touted for minimal power consumption (< 1 Watt), which opens new possibilities of achieving higher energy efficiency in datacenter workloads. Our systematic evaluation of the Edison cluster and comparisons to conventional brawny clusters involve careful workload selection and laborious parameter tuning, which ensures maximum server utilization and thus fair comparisons. Results show that the Edison cluster achieves up to a 3.5× improvement in work-done-per-joule for web service applications and data-intensive MapReduce jobs. In terms of scalability, the Edison cluster scales linearly on the throughput of web service workloads, and also shows satisfactory scalability for MapReduce workloads despite coordination overhead. This research was supported in part by NSF grant 13-20209.
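    The headline metric is straightforward to compute once throughput and average power are measured; the sketch below uses made-up numbers purely to illustrate the arithmetic (they are not the paper's measurements).

```python
def work_per_joule(requests_completed, avg_power_watts, duration_s):
    """Work-done-per-joule = completed work / consumed energy (in joules)."""
    energy_joules = avg_power_watts * duration_s
    return requests_completed / energy_joules

# Hypothetical numbers for illustration only:
edison_cluster = work_per_joule(requests_completed=90_000, avg_power_watts=35, duration_s=600)
brawny_server = work_per_joule(requests_completed=400_000, avg_power_watts=250, duration_s=600)
print(edison_cluster / brawny_server)   # ~1.6x better work-done-per-joule in this toy example
```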

    A Generalized Packing Server for Scheduling Task Graphs on Multiple Resources

    This paper presents the generalized packing server. It reduces the problem of scheduling tasks with precedence constraints on multiple processing units to the problem of scheduling independent tasks. The work generalizes our previous contribution, made in the specific context of scheduling Map/Reduce workflows. The results apply to the generalized parallel task model, introduced in recent literature to denote tasks described by workflow graphs, where some subtasks may be executed in parallel subject to precedence constraints. Recent literature has developed schedulability bounds for generalized parallel tasks on multiprocessors. The generalized packing server, described in this paper, is a run-time mechanism that packs tasks into server budgets (in a manner that respects precedence constraints), allowing the budgets to be viewed as independent tasks by the underlying scheduler. Consequently, any schedulability results derived for the independent task model on multiprocessors become applicable to generalized parallel tasks. The catch is that the sum of the capacities of the server budgets exceeds the sum of the execution times of the original generalized parallel tasks by a certain ratio. Hence, a scaling factor is derived that converts bounds for independent tasks into corresponding bounds for generalized parallel tasks. The factor applies to any work-conserving scheduling policy in both the global and partitioned multiprocessor scheduling models. We show that the new schedulability bounds obtained for the generalized parallel task model, using the aforementioned conversion, improve in several cases upon the best known bounds in current literature. Hence, the packing server is shown to improve the schedulability of generalized parallel tasks. Evaluation results confirm this observation.
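    One way to read the bound conversion described above, with symbols introduced here for illustration (not necessarily the paper's notation): let C_i and T_i be the execution times and periods of the generalized parallel tasks, let rho >= 1 be the ratio by which the packed server budgets inflate the total execution demand, and let B be a utilization bound under which the underlying scheduler guarantees independent tasks. Scaling the independent-task bound by rho then gives a sufficient condition for the original task graph:

```latex
% Budgets inflate demand by at most \rho, so if the budgets (seen as
% independent tasks) satisfy the bound B, the original generalized parallel
% tasks are schedulable whenever
\[
\sum_i \frac{C_i}{T_i} \;\le\; \frac{B}{\rho}.
\]
```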

    Planning and Resource Allocation for Hard Real-time, Fault-Tolerant Plan Execution

    We describe the interface between a real-time resource allocation system and an AI planner, used to create fault-tolerant plans that are guaranteed to execute in hard real time. The planner specifies the task set and all execution deadlines required to ensure system safety; the resource allocation system then determines resource utilization. A new interface module combines information from planning and resource allocation to enforce development of plans that are feasible for execution during a variety of internal system faults. Plans that over-utilize any system resource trigger feedback to the planner, which then searches for an alternate plan. A valid plan for each specified fault, including the nominal no-fault situation, is stored in a plan cache for subsequent real-time execution. We situate this work in the context of CIRCA, the Cooperative Intelligent Real-time Control Architecture, which focuses on developing and scheduling plans that make hard real-time safety guarantees, and provide an example of an autonomous aircraft agent to illustrate how our planner-resource allocation interface improves CIRCA performance.
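    The feedback loop between planner and resource allocator, as described above, can be sketched as follows (the hooks make_plan and is_schedulable are hypothetical stand-ins for the CIRCA planner and the real-time schedulability check, not the actual CIRCA interfaces):

```python
def build_plan_cache(fault_modes, make_plan, is_schedulable):
    """For every fault mode (including the nominal no-fault case), keep asking
    the planner for alternate plans until one fits the real-time resource
    budget, then cache it for subsequent hard real-time execution."""
    cache = {}
    for fault in fault_modes:
        rejected = []                                # feedback: plans that over-utilized a resource
        while fault not in cache:
            plan = make_plan(fault, avoid=rejected)  # planner searches for an alternate plan
            if plan is None:
                raise RuntimeError(f"no feasible plan for fault mode {fault!r}")
            if is_schedulable(plan):                 # resource-allocation feasibility check
                cache[fault] = plan
            else:
                rejected.append(plan)
    return cache
```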